Review for NeurIPS paper: Graph Random Neural Networks for Semi-Supervised Learning on Graphs

Neural Information Processing Systems

Weaknesses: The proposed methods are not especially novel. More specifically: (1) The consistency regularization appears to be a general framework that can be combined with other data augmentation methods, such as DropEdge, as well as with sampling algorithms. It would strengthen the paper if the authors also evaluated these combinations, instead of adopting only their proposed DropNode augmentation. (2) It would also be helpful to provide a curve showing the performance of the proposed framework against other baselines under different training-data percentages, and to combine these methods with a more advanced base GNN.
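The consistency regularization the reviewer refers to penalizes disagreement between a model's predictions on several randomly augmented views of the same unlabeled nodes. A minimal NumPy sketch of this idea (the temperature value and the squared-error distance are illustrative choices, not necessarily the paper's exact hyperparameters):

```python
import numpy as np

def sharpen(probs, T=0.5):
    """Temperature-sharpen predicted class distributions (rows sum to 1)."""
    p = probs ** (1.0 / T)
    return p / p.sum(axis=1, keepdims=True)

def consistency_loss(pred_list):
    """Penalize disagreement across augmentations: squared distance between
    each augmentation's predictions and the sharpened average prediction."""
    avg = np.mean(pred_list, axis=0)   # average over augmentations
    target = sharpen(avg)              # sharpened consensus target
    return float(np.mean([np.mean((p - target) ** 2) for p in pred_list]))
```

Because the loss only compares predictions across views, any augmentation (DropNode, DropEdge, subgraph sampling) can supply the views, which is the basis of the reviewer's point (1).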



Graph Random Neural Networks for Semi-Supervised Learning on Graphs

Neural Information Processing Systems

We study the problem of semi-supervised learning on graphs, for which graph neural networks (GNNs) have been extensively explored. However, most existing GNNs inherently suffer from the limitations of over-smoothing, non-robustness, and weak generalization when labeled nodes are scarce. In this paper, we propose a simple yet effective framework--GRAPH RANDOM NEURAL NETWORKS (GRAND)--to address these issues. In GRAND, we first design a random propagation strategy to perform graph data augmentation; we then leverage consistency regularization to enforce prediction consistency across different augmentations of the unlabeled nodes. Extensive experiments on graph benchmark datasets suggest that GRAND significantly outperforms state-of-the-art GNN baselines on semi-supervised node classification.
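The random propagation strategy drops entire node feature rows at random (DropNode) and then mixes multi-hop neighborhood averages. A minimal sketch under stated assumptions: `A_hat` is a precomputed normalized adjacency matrix, and survivors are rescaled by 1/(1-p) to keep the feature expectation unchanged; function names are illustrative, not the paper's API.

```python
import numpy as np

def drop_node(X, p, rng):
    """Zero out each node's entire feature row with probability p;
    rescale kept rows by 1/(1-p) so E[output] == X."""
    mask = rng.random(X.shape[0]) >= p
    return X * mask[:, None] / (1.0 - p)

def random_propagate(X, A_hat, p, K, rng):
    """DropNode followed by mixed-order propagation:
    average of A_hat^k @ X_dropped for k = 0..K."""
    Xd = drop_node(X, p, rng)
    out = Xd.copy()
    acc = Xd.copy()
    for _ in range(K):
        acc = A_hat @ acc   # one more hop of propagation
        out += acc
    return out / (K + 1)
```

Running this S times with independent DropNode masks yields S augmented feature matrices, whose predictions are then tied together by the consistency regularization described in the review above.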